
    High-severity wildfire leads to multi-decadal impacts on soil biogeochemistry in mixed-conifer forests.

    During the past century, systematic wildfire suppression has decreased fire frequency and increased fire severity in the western United States of America. While this has resulted in large ecological changes aboveground, such as altered tree species composition and increased forest density, little is known about the long-term, belowground implications of altered, ecologically novel fire regimes, especially on soil biological processes. To better understand the long-term implications of ecologically novel, high-severity fire, we used a 44-yr high-severity fire chronosequence in the Sierra Nevada, where forests were historically adapted to frequent, low-severity fire but were fire suppressed for at least 70 yr. High-severity fire in the Sierra Nevada resulted in a long-term (44+ yr) decrease (>50%, P < 0.05) in soil extracellular enzyme activities, basal microbial respiration (56-72%, P < 0.05), and organic carbon (>50%, P < 0.05) in the upper 5 cm compared to sites that had not been burned for at least 115 yr. However, nitrogen (N) processes were only affected in the most recent fire site (4 yr post-fire). Net nitrification increased by over 600% in the most recent fire site (P < 0.001), but returned to levels similar to the unburned control in the 13-yr site. Contrary to previous studies, we did not find a consistent effect of plant cover type on soil biogeochemical processes in mid-successional (10-50 yr) forest soils. Rather, the 44-yr reduction in soil organic carbon (C) quantity correlated positively with dampened C cycling processes. Our results show the drastic and long-term implications of ecologically novel, high-severity fire on soil biogeochemistry and underscore the need for long-term fire ecology experiments.

    Discrete Wigner functions and quantum computational speedup

    In [Phys. Rev. A 70, 062101 (2004)] Gibbons et al. defined a class of discrete Wigner functions W to represent quantum states in a finite Hilbert space of dimension d. I characterize a set C_d of states having non-negative W simultaneously in all definitions of W in this class. For d < 6 I show C_d is the convex hull of stabilizer states. This supports the conjecture that negativity of W is necessary for exponential speedup in pure-state quantum computation. Comment: 7 pages, 2 figures, RevTeX. v2: clarified discussion on dynamics, added refs., published version.
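As an illustration of the negativity discussed above, the following sketch evaluates a single-qubit (d = 2) discrete Wigner function for a stabilizer state and for a T-type magic state. The phase-point-operator convention used here is one common Wootters-style choice, taken as an assumption for illustration rather than the paper's specific construction.

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def phase_point(q, p):
    # One Wootters-style phase-point operator for d = 2
    # (an illustrative convention, not necessarily the paper's)
    return 0.5 * (I2 + (-1)**q * Z + (-1)**p * X + (-1)**(q + p) * Y)

def discrete_wigner(rho):
    # W(q, p) = (1/d) Tr[rho A(q, p)] with d = 2
    return np.array([[0.5 * np.trace(rho @ phase_point(q, p)).real
                      for p in (0, 1)] for q in (0, 1)])

ket0 = np.array([1, 0], dtype=complex)                        # |0>, a stabilizer state
magic = np.array([1, np.exp(1j * np.pi / 4)]) / np.sqrt(2)    # T-type magic state

W_stab = discrete_wigner(np.outer(ket0, ket0.conj()))
W_magic = discrete_wigner(np.outer(magic, magic.conj()))
# W_stab is non-negative everywhere, while W_magic has a negative
# entry, consistent with negativity marking non-stabilizer states.
```

Both distributions sum to 1 over the four phase-space points; only the magic state exhibits a negative value in this convention.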

    Analysis of surface moisture variations within large field sites

    A statistical analysis was made on ground soils to define the general relationship and ranges of values of the field moisture relative to both the variance and coefficient of variation for a given test site and depth increment. The results of the variability study show that: (1) moisture variations within any given large field area are inherent and can neither be controlled nor reduced; (2) neither a single value of the standard deviation nor of the coefficient of variation uniquely defines the variability over the complete range of mean field moisture contents examined; and (3) using an upper-bound standard deviation parameter clearly defines the maximum range of anticipated moisture variability. 87 percent of all large-field moisture content standard deviations were less than 3 percent, while about 96 percent of all the computed values had an upper bound of sigma = 4 percent for these intensively sampled fields. The limit-of-accuracy curves of mean soil moisture measurements for large field sites relative to the required number of samples were determined.
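The limit-of-accuracy relationship between sampling effort and moisture estimation can be sketched with the standard normal-theory sample-size formula; the function name and the 95% z-value are illustrative assumptions, with sigma = 4 taken from the upper-bound standard deviation reported above.

```python
import math

SIGMA_UPPER = 4.0  # upper-bound standard deviation (percent moisture) from the study

def samples_required(sigma, half_width, z=1.96):
    """Samples needed so the mean moisture estimate lies within
    +/- half_width percent moisture at ~95% confidence, assuming
    the normal-theory relation n = (z * sigma / half_width)^2."""
    return math.ceil((z * sigma / half_width) ** 2)

# e.g. bounding the field mean to +/- 1 percent moisture
# at the upper-bound sigma of 4 percent
n = samples_required(SIGma_UPPER if False else SIGMA_UPPER, 1.0)
```

Tightening the required half-width quadruples the sample count for each halving, which is why such accuracy curves rise steeply for intensively sampled fields.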

    A New Measurement of Cosmic Ray Composition at the Knee

    The Dual Imaging Cerenkov Experiment (DICE) was designed and operated for making elemental composition measurements of cosmic rays near the knee of the spectrum at several PeV. Here we present the first results using this experiment from the measurement of the average depth of shower maximum, <Xmax>, in the atmosphere as a function of particle energy. The value of <Xmax> near the instrument threshold of ~0.1 PeV is consistent with expectations from previous direct measurements. At higher energies there is little change in composition up to ~5 PeV. Above this energy <Xmax> is deeper than expected for a constant elemental composition, implying the overall elemental composition is becoming lighter above the knee region. These results disagree with the idea that cosmic rays should become on average heavier above the knee. Instead they suggest a transition to a qualitatively different population of particles above 5 PeV. Comment: 7 pages, LaTeX, two eps figures, aas2pp4.sty and epsf.sty included, accepted by Ap.J. Lett.

    Risk and reliability assessment of future power systems

    Liberalisation of electricity markets, changing patterns in the generation and use of electricity, and new technologies are some of the factors that result in increased uncertainty about the future operating requirements of an electric power system. In this context, planning for future investments in a power system requires careful consideration of risk and reliability, and of the metrics with which these are measured. This paper highlights the need for consideration of a broader class of approaches to risk and reliability that have hitherto tended not to be an explicit part of the system development process in the electricity industry. We discuss a high level conceptual model that shows sources of uncertainty and modes of control for system operators and planners and offers a broad-brush approach to highlight risks at the planning stage. We argue that there is a need for new risk-informed criteria to help evaluate the necessary investments in electricity transmission systems. We further argue that the risk models that are developed for this purpose need to take better account of overall societal impact than is captured by traditional measures such as loss of load probability and loss of load expectation; societal impact should take account of frequencies of events with different levels of consequences, distinguishing, for example, between multiple small events and a single large event. This leads to discussion of a “disutility criterion” which has been previously studied in a health and safety context to distinguish between risk aversion and disaster aversion. This approach is new in the context of power systems.
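A minimal sketch of the kind of disutility criterion discussed: assuming a simple power-law disutility of event size (the exponent and functional form are hypothetical illustrations, not the paper's actual formulation), many small interruptions and one large interruption with identical expected energy not served are weighted differently.

```python
def eens(events):
    # Expected energy not served (MWh/yr): sum of frequency x size
    return sum(f * size for f, size in events)

def expected_disutility(events, alpha=1.5):
    # Illustrative power-law disutility: alpha > 1 weights a single
    # large event more heavily than many small events of the same
    # total size (disaster aversion, beyond plain risk aversion).
    return sum(f * size**alpha for f, size in events)

many_small = [(10.0, 10.0)]   # ten 10 MWh interruptions per year
one_large = [(1.0, 100.0)]    # one 100 MWh interruption per year
# Same EENS, but the single large event carries higher disutility.
```

Traditional expectation-based measures cannot separate these two cases; the disutility criterion does, which is the distinction the abstract draws between societal impact and loss-of-load expectation.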

    Macroscopic modelling of the surface tension of polymer-surfactant systems

    Polymer-surfactant mixtures are increasingly being used in a wide range of applications. Weakly-interacting systems, such as SDS/PEO and SDS/PVP, comprise ionic surfactants and neutral polymers, while strongly-interacting systems, such as SDS/POLYDMDAAC and C12TAB/NaPSS, comprise ionic surfactants and oppositely charged ionic polymers. The complex nature of interactions in the mixtures leads to interesting and surprising surface tension profiles as the concentrations of polymer and surfactant are varied. The purpose of our research has been to develop a model to explain these surface tension profiles and to understand how they relate to the formation of different complexes in the bulk solution. In this paper we show how an existing model based on the law of mass action can be extended to model the surface tension of weakly-interacting systems, and we also extend it further to produce a model for the surface tension of strongly-interacting systems. Applying the model to a variety of strongly-interacting systems gives remarkable agreement with the experimental results. The model provides a sound theoretical basis for comparing and contrasting the behaviour of different systems and greatly enhances our understanding of the features observed.
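A hedged sketch of the law-of-mass-action ingredient: assuming a simple Langmuir-type binding of surfactant to polymer (the binding model, parameter names, and values are illustrative, not the authors' extended model), the free monomer concentration, which in such models sets the surface tension, can be found by bisection on the mass balance.

```python
def free_surfactant(S_total, P_total, K):
    """Free surfactant monomer concentration when bound surfactant
    follows an assumed Langmuir-type isotherm:
        bound = P_total * K * S / (1 + K * S).
    The mass balance S_free + bound = S_total is monotone in S_free,
    so bisection converges reliably."""
    lo, hi = 0.0, S_total
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        bound = P_total * K * mid / (1.0 + K * mid)
        if mid + bound > S_total:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# With polymer present, less surfactant remains free to adsorb at
# the air-water interface, shifting the surface tension profile
# relative to a polymer-free solution at the same total surfactant.
```

This captures, in miniature, why bulk complexation reshapes the surface tension curves: the interface responds to the free monomer concentration, not the total amount added.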

    OATS: Optimisation and Analysis Toolbox for Power Systems

    The Optimisation and Analysis Toolbox for Power Systems (OATS) is an open-source simulation tool for steady-state analyses of power systems problems, distributed under the GNU General Public License (GPLv3). It contains implementations of classical steady-state problems, e.g. load flow, optimal power flow (OPF) and unit commitment, as well as enhancements to these classical models relative to the features available in widely used open-source tools. Enhancements implemented in the current release of OATS include: a model of voltage-regulating on-load tap-changing transformers; load shedding in OPF; allowing a user to build a contingency list in the security-constrained OPF analysis; implementation of a distributed slack bus; and the ability to model zonal transfer limits in unit commitment. The mathematical optimisation models are written in an open-source algebraic modelling language, which offers high-level symbolic syntax for describing optimisation problems. The flexibility offered by OATS makes it an ideal tool for teaching and academic research. This paper presents novel aspects of OATS and discusses, through demonstrative examples, how OATS can be extended to new problem classes in the area of steady-state power systems analysis.
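As a minimal illustration of the classical steady-state problems mentioned above, the following sketch solves a DC load flow on a hypothetical three-bus network with NumPy. This is a generic textbook formulation, not OATS's API or its algebraic-modelling-language syntax.

```python
import numpy as np

# Three-bus test network: (from_bus, to_bus, susceptance in p.u.)
lines = [(0, 1, 10.0), (0, 2, 10.0), (1, 2, 10.0)]
n_bus = 3

# Assemble the bus susceptance matrix B
B = np.zeros((n_bus, n_bus))
for i, j, b in lines:
    B[i, i] += b; B[j, j] += b
    B[i, j] -= b; B[j, i] -= b

P = np.array([0.0, 0.5, -0.5])  # net injections (p.u.); bus 0 is the slack

# DC load flow: solve P = B * theta for non-slack angles (theta_slack = 0)
theta = np.zeros(n_bus)
theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])

# Line flows follow from angle differences
flow = {(i, j): b * (theta[i] - theta[j]) for i, j, b in lines}
```

Adding generation cost and line-limit constraints to this linear model turns it into a DC OPF, the kind of extension an algebraic modelling language expresses symbolically.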

    A method for the evaluation and optimisation of power losses and reliability of supply in a distribution network

    This paper presents two methods for evaluating and optimising the configuration of a distribution network. A new loss-optimisation method is described which partitions, optimises and then recombines the network topology to identify the lowest-loss configurations available. A reliability evaluation method is presented which evaluates, on a load-by-load basis, the most effective restoration path and the associated time. In contrast to previously-reported methods, the operation of different types of switch is integrated into this approach, reducing dependency on pre-determined restoration times for each load and each fault location. This provides a more accurate estimate of the outage durations through identification of the specific restoration method for each load under each fault condition. The optimisation method applied is shown to be effective in identifying optimally-reliable network topologies. Significant benefits are shown to be available.
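A minimal sketch of the load-by-load restoration idea: assuming each candidate restoration path reduces to a sequence of switch operations with type-dependent operating times (the times and data structures here are hypothetical, not the paper's), the estimated outage duration for a load is the fastest path available to it.

```python
# Hypothetical switch operation times in minutes (illustrative values)
SWITCH_TIME = {"remote": 1.0, "manual": 45.0}

def restoration_time(candidate_paths):
    """Fastest restoration time for one load under one fault: each
    candidate path is the sequence of switch operations needed to
    isolate the fault and close an alternative supply route."""
    return min(sum(SWITCH_TIME[s] for s in path)
               for path in candidate_paths)

# A load restorable via two remotely controlled switches is restored
# far faster than one that needs a manually operated tie switch.
t = restoration_time([["remote", "remote"], ["manual"]])
```

Evaluating this per load and per fault location, rather than assuming a single pre-determined restoration time, is what yields the more accurate outage-duration estimates described above.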

    Capacity markets and the EU target model – a Great Britain case study

    The growth of interconnection between national electricity markets is key to the development and competitive efficiency of the Single EU Market for Electricity. However, in parallel with the development of the Single Market, a growing number of EU Member States have implemented – or are in the process of developing – national Capacity Mechanisms in order to ensure future security of supply, which may distort the cross-border trade of energy across interconnectors and reduce total welfare. In particular, the Electricity Market Reform (EMR) legislative package recently brought in by the UK government introduced a Capacity Market (in which two rounds of auctions have taken place to date) for the provision of generation capacity from 2018. In order to ensure that such national markets do not distort the wider energy market, it is important that the role of cross-border capacity, and the availability of interconnector capacity, is correctly incorporated into such mechanisms. In the first annual GB auction the net contribution of interconnection was included on a conservative basis informed by historical data, and while interconnectors have since been permitted to bid into the Capacity Market at a de-rated value (in a similar manner to domestic generation), generators in other markets are still not able to explicitly participate. This may continue to introduce market distortions and adversely impact both short-term dispatch and long-term investment decisions in both the GB and neighbouring markets. A number of routes are available to resolve this through a mechanism to permit cross-border participation of generators, but this requires resolution of a number of complicating factors, not least a means for properly allocating transmission capacity without introducing further distortions to the energy market.
Alternative solutions could be enacted at an EU level, such as through the alignment of Capacity Mechanisms to a common model, or the introduction of an EU-wide single Capacity Mechanism, but the current regulatory focus appears to remain on resolution of such issues at a national level.

    Synthesis of wind time series for network adequacy assessment

    When representing the stochastic characteristics of wind generators within power system simulations, the spatial and temporal correlations of the wind resource must be correctly modelled to ensure that reserve and network capacity requirements are not underestimated. A methodology for capturing these correlations within a vector auto-regressive (VAR) model is presented, and applied to a large-scale reanalysis dataset of historical wind speed data for the British Isles. This is combined with a wind speed-to-power conversion model trained against historically metered data from wind farms on the Great Britain (GB) electricity system in order to derive a lightweight model for simulating injections of wind power across a transmission network. The model is demonstrated to adequately represent ramp rates, both at a site and network level, as well as the individual correlations between sites, while being suitable for network adequacy studies which may require the simulation of many years of operation.
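A minimal sketch of the VAR approach described: assuming a first-order model with diagonal temporal persistence and spatially correlated innovations (the coefficients below are illustrative, not fitted to the reanalysis data), simulated site series reproduce both the cross-site correlation and the temporal autocorrelation that adequacy studies rely on.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites = 3
A = 0.9 * np.eye(n_sites)  # temporal persistence at each site (illustrative)

# Spatial correlation of the innovations between sites (illustrative)
Sigma = np.array([[1.0, 0.6, 0.3],
                  [0.6, 1.0, 0.6],
                  [0.3, 0.6, 1.0]])
L = np.linalg.cholesky(Sigma)

# Simulate a VAR(1): x_t = A x_{t-1} + L e_t, e_t ~ N(0, I)
x = np.zeros(n_sites)
steps = []
for _ in range(5000):
    x = A @ x + L @ rng.standard_normal(n_sites)
    steps.append(x.copy())
series = np.array(steps)  # shape (time, site): standardised residuals
# A marginal transform to wind speed and a power-curve conversion
# would be applied downstream, as in the speed-to-power model above.
```

Because reserve requirements depend on simultaneous shortfalls across sites, ignoring the off-diagonal innovation correlation would understate network-wide variability, which is the underestimation risk the abstract warns about.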